15 research outputs found

    A new approach to seasonal energy consumption forecasting using temporal convolutional networks

    Get PDF
    There has been a significant increase in attention to resource management in smart grids, and several energy forecasting models have been published in the literature. Energy forecasting plays a crucial role in several smart grid applications, including demand-side management, optimal dispatch, and load shedding. A significant challenge in smart grid models is managing forecasts efficiently while keeping the prediction error as small as feasible. Recurrent neural networks, a class of artificial neural networks, are frequently used to forecast time-series data. However, because of limitations of recurrent neural networks, such as vanishing gradients and poor memory retention, convolutional networks can model sequential data more effectively and solve such complex problems better. In this research, a temporal convolutional network is proposed to handle seasonal short-term energy forecasting. The proposed temporal convolutional network computes outputs in parallel, reducing computation time compared with recurrent neural networks. A further performance comparison with the traditional long short-term memory network in terms of mean absolute deviation (MAD) and symmetric mean absolute percentage error (sMAPE) shows that the proposed model outperforms the recurrent neural network.
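The abstract evaluates forecasts by MAD and sMAPE; as a minimal sketch (the paper's exact metric variants are not specified here, and the load values below are purely illustrative), these two metrics can be computed as:

```python
def mad(actual, forecast):
    # Mean absolute deviation between observed and predicted values.
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def smape(actual, forecast):
    # Symmetric mean absolute percentage error, expressed in percent.
    terms = [abs(a - f) / ((abs(a) + abs(f)) / 2.0)
             for a, f in zip(actual, forecast)]
    return 100.0 * sum(terms) / len(terms)

# Hypothetical hourly load values (kWh), for illustration only.
load_true = [100.0, 110.0, 95.0, 120.0]
load_pred = [98.0, 112.0, 100.0, 115.0]
print(mad(load_true, load_pred))              # 3.5
print(round(smape(load_true, load_pred), 2))  # 3.3
```

Because sMAPE normalizes by the average magnitude of actual and forecast values, it lets models be compared across customers with very different consumption levels.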

    Forecasting Energy Consumption Demand of Customers in Smart Grid Using Temporal Fusion Transformer (TFT)

    Get PDF
    Energy consumption prediction has always remained a concern for researchers because of the rapid growth of the human population and of customers joining smart grid networks for smart home facilities. Recently, the spread of COVID-19 has dramatically increased energy consumption in the residential sector. Hence, it is essential to produce energy according to residential customers' requirements, improve economic efficiency, and reduce production costs. Previously published papers have considered overall energy consumption prediction, making it difficult for production companies to produce energy according to customers' future demand. Using the proposed study, production companies can accurately supply energy according to their customers' needs by forecasting future energy consumption demand. Scientists and researchers are trying to minimize energy consumption by applying different optimization and prediction techniques; hence, this study proposes a daily, weekly, and monthly energy consumption prediction model using a Temporal Fusion Transformer (TFT), which considers both primary and valuable data sources and batch training techniques. The model's performance has been compared with Long Short-Term Memory (LSTM), interpretable LSTM, and Temporal Convolutional Network (TCN) models, and it remained better than the other algorithms, with a mean squared error (MSE), root mean squared error (RMSE), and mean absolute error (MAE) of 4.09, 2.02, and 1.50, respectively. Further, the overall symmetric mean absolute percentage error (sMAPE) of LSTM, interpretable LSTM, TCN, and the proposed TFT remained at 29.78%, 31.10%, 36.42%, and 26.46%, respectively. The sMAPE of the TFT shows that the model performed better than the other deep learning models.

    Short term energy consumption forecasting using neural basis expansion analysis for interpretable time series

    Get PDF
    Smart grids and smart homes are attracting people's attention in the modern era of smart cities. Advances in smart technologies and smart grids have created challenges related to energy efficiency and to production according to clients' future demand. Machine learning, specifically neural network-based methods, has been successful in energy consumption prediction, but gaps remain because of uncertainty in the data and limitations of the algorithms. Research published in the literature has used small datasets and profiles of mostly single users; therefore, those models struggle when applied to large datasets with profiles of different customers. Thus, a smart grid environment requires a model that handles consumption data from thousands of customers. The proposed model enhances the newly introduced method of Neural Basis Expansion Analysis for interpretable Time Series (N-BEATS) with a large dataset of energy consumption from 169 customers. Further, to validate the results of the proposed model, a performance comparison has been carried out with Long Short-Term Memory (LSTM), Blocked LSTM, Gated Recurrent Units (GRU), Blocked GRU, and Temporal Convolutional Network (TCN) models. The proposed interpretable model improves prediction accuracy on the large dataset containing energy consumption profiles of multiple customers. Incorporating covariates into the model improved accuracy by learning past and future energy consumption patterns. On this large dataset, the proposed model performed better for daily, weekly, and monthly energy consumption predictions. The forecasting accuracy of the interpretable N-BEATS model for 1-day-ahead energy consumption with day as a covariate remained better than in the 1-, 2-, 3-, and 4-week scenarios.
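The interpretability of N-BEATS comes from projecting learned coefficients onto fixed basis functions. As a simplified illustration (not the paper's implementation; the coefficients below are hypothetical stand-ins for what a block's fully connected layers would emit), a trend block's forecast over the horizon is a polynomial in normalized time:

```python
def trend_forecast(theta, horizon):
    # Interpretable trend output of an N-BEATS-style block: the forecast
    # is a polynomial in normalized time t/horizon in [0, 1), with learned
    # coefficients theta = [constant, slope, curvature, ...].
    return [
        sum(coef * (t / horizon) ** power for power, coef in enumerate(theta))
        for t in range(horizon)
    ]

# Hypothetical learned coefficients: constant, slope, curvature.
theta = [10.0, 5.0, -2.0]
print(trend_forecast(theta, 4))  # [10.0, 11.125, 12.0, 12.625]
```

Because each output is an explicit polynomial of time, the trend component of the forecast can be read directly off the coefficients, which is what makes the model interpretable.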

    Designing a relational model to identify relationships between suspicious customers in anti-money laundering (AML) using social network analysis (SNA)

    Get PDF
    The stability of the economy and political system of any country depends highly on its anti-money laundering (AML) policy. If government policies are incapable of handling money laundering activities appropriately, control of the economy can be transferred to criminals. The current literature provides various technical solutions, such as clustering-based anomaly detection techniques, rule-based systems, and decision tree algorithms, to control such activities by identifying suspicious customers or transactions. However, the literature provides no effective and appropriate solution for identifying relationships between suspicious customers or transactions. The current challenge in the field is to identify the associated links between suspicious customers who are involved in money laundering. To address this challenge, this paper discusses the difficulties of identifying relationships such as business and family ties and proposes a model to identify links between suspicious customers using social network analysis (SNA). The proposed model aims to identify the various mafias and groups involved in money laundering activities, thereby aiding in the prevention of money laundering and potential terrorist financing. The model is based on relational data from customer profiles and on social network metrics to identify suspicious customers and transactions. A series of experiments conducted with financial data yielded promising results for financial institutions, which can gain real benefits from the proposed model.
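The paper's SNA model itself is not reproduced here, but the core graph idea (treating shared attributes or transfers between flagged customers as edges and extracting connected groups) can be sketched as follows; the function name and the example links are hypothetical:

```python
from collections import defaultdict, deque

def suspicious_groups(links):
    # links: list of (customer_a, customer_b) pairs representing detected
    # relationships, e.g. shared addresses, joint accounts, or transfers
    # between flagged customers. Returns the connected components, i.e.
    # candidate groups for AML investigation.
    graph = defaultdict(set)
    for a, b in links:
        graph[a].add(b)
        graph[b].add(a)
    seen, groups = set(), []
    for node in graph:
        if node in seen:
            continue
        queue, component = deque([node]), set()
        while queue:  # breadth-first traversal of one component
            cur = queue.popleft()
            if cur in component:
                continue
            component.add(cur)
            queue.extend(graph[cur] - component)
        seen |= component
        groups.append(component)
    return groups

# Hypothetical flagged relationships, for illustration only.
links = [("C1", "C2"), ("C2", "C3"), ("C4", "C5")]
print(suspicious_groups(links))
```

In a production setting, each edge would additionally carry a relationship type and weight so that investigators can prioritize, for example, dense transfer rings over incidental family links.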

    Global economic burden of unmet surgical need for appendicitis

    Get PDF
    Background: There is a substantial gap in provision of adequate surgical care in many low- and middle-income countries. This study aimed to identify the economic burden of unmet surgical need for the common condition of appendicitis. Methods: Data on the incidence of appendicitis from 170 countries and two different approaches were used to estimate numbers of patients who do not receive surgery: as a fixed proportion of the total unmet surgical need per country (approach 1); and based on country income status (approach 2). Indirect costs with current levels of access and local quality, and those if quality were at the standards of high-income countries, were estimated. A human capital approach was applied, focusing on the economic burden resulting from premature death and absenteeism. Results: Excess mortality was 4185 per 100 000 cases of appendicitis using approach 1 and 3448 per 100 000 using approach 2. The economic burden of continuing current levels of access and local quality was US $92 492 million using approach 1 and $73 141 million using approach 2. The economic burden of not providing surgical care to the standards of high-income countries was $95 004 million using approach 1 and $75 666 million using approach 2. The largest share of these costs resulted from premature death (97.7 per cent) and lack of access (97.0 per cent) in contrast to lack of quality. Conclusion: For a comparatively non-complex emergency condition such as appendicitis, increasing access to care should be prioritized. Although improving quality of care should not be neglected, increasing provision of care at current standards could reduce societal costs substantially.

    Pooled analysis of WHO Surgical Safety Checklist use and mortality after emergency laparotomy

    Get PDF
    Background The World Health Organization (WHO) Surgical Safety Checklist has fostered safe practice for 10 years, yet its place in emergency surgery has not been assessed on a global scale. The aim of this study was to evaluate reported checklist use in emergency settings and examine the relationship with perioperative mortality in patients who had emergency laparotomy. Methods In two multinational cohort studies, adults undergoing emergency laparotomy were compared with those having elective gastrointestinal surgery. Relationships between reported checklist use and mortality were determined using multivariable logistic regression and bootstrapped simulation. Results Of 12 296 patients included from 76 countries, 4843 underwent emergency laparotomy. After adjusting for patient and disease factors, checklist use before emergency laparotomy was more common in countries with a high Human Development Index (HDI) (2455 of 2741, 89.6 per cent) compared with that in countries with a middle (753 of 1242, 60.6 per cent; odds ratio (OR) 0.17, 95 per cent c.i. 0.14 to 0.21, P < 0.001) or low (363 of 860, 42.2 per cent; OR 0.08, 0.07 to 0.10, P < 0.001) HDI. Checklist use was less common in elective surgery than for emergency laparotomy in high-HDI countries (risk difference -9.4 (95 per cent c.i. -11.9 to -6.9) per cent; P < 0.001), but the relationship was reversed in low-HDI countries (+12.1 (+7.0 to +17.3) per cent; P < 0.001). In multivariable models, checklist use was associated with a lower 30-day perioperative mortality (OR 0.60, 0.50 to 0.73; P < 0.001). The greatest absolute benefit was seen for emergency surgery in low- and middle-HDI countries. Conclusion Checklist use in emergency laparotomy was associated with a significantly lower perioperative mortality rate. Checklist use in low-HDI countries was half that in high-HDI countries.

    Global variation in anastomosis and end colostomy formation following left-sided colorectal resection

    Get PDF
    Background End colostomy rates following colorectal resection vary across institutions in high-income settings, being influenced by patient, disease, surgeon and system factors. This study aimed to assess global variation in end colostomy rates after left-sided colorectal resection. Methods This study comprised an analysis of GlobalSurg-1 and -2 international, prospective, observational cohort studies (2014, 2016), including consecutive adult patients undergoing elective or emergency left-sided colorectal resection within discrete 2-week windows. Countries were grouped into high-, middle- and low-income tertiles according to the United Nations Human Development Index (HDI). Factors associated with colostomy formation versus primary anastomosis were explored using a multilevel, multivariable logistic regression model. Results In total, 1635 patients from 242 hospitals in 57 countries undergoing left-sided colorectal resection were included: 113 (6·9 per cent) from low-HDI, 254 (15·5 per cent) from middle-HDI and 1268 (77·6 per cent) from high-HDI countries. There was a higher proportion of patients with perforated disease (57·5, 40·9 and 35·4 per cent; P < 0·001) and subsequent use of end colostomy (52·2, 24·8 and 18·9 per cent; P < 0·001) in low- compared with middle- and high-HDI settings. The association with colostomy use in low-HDI settings persisted (odds ratio (OR) 3·20, 95 per cent c.i. 1·35 to 7·57; P = 0·008) after risk adjustment for malignant disease (OR 2·34, 1·65 to 3·32; P < 0·001), emergency surgery (OR 4·08, 2·73 to 6·10; P < 0·001), time to operation at least 48 h (OR 1·99, 1·28 to 3·09; P = 0·002) and disease perforation (OR 4·00, 2·81 to 5·69; P < 0·001). Conclusion Global differences existed in the proportion of patients receiving end stomas after left-sided colorectal resection based on income, which went beyond case mix alone.

    A proximity-aware semantic-based decentralized resource discovery framework for computational grids

    No full text
    Resource discovery is a service of Grid resource management that is considered a core part of computational Grids. The service is one of the fundamental requirements of highly dynamic and heterogeneous computational Grids, providing appropriate resources for users to meet their jobs' requirements. Existing research reveals that centralized and hierarchical resource discovery models can perform poorly for large Grids because of various limitations. As a result, decentralized resource discovery is recommended for large Grids. However, a lack of coordination between users and providers in a decentralized computational Grid environment often results in user jobs failing to find appropriate resources. One of the reasons for the rejection of jobs is the use of a fixed schema between user requests and providers' availability, which can affect the overall performance of the Grid. Resource discovery performance in a decentralized Grid environment suffers from four main drawbacks: high communication overheads, low job success probability, high latency and low resource utilization. To eliminate these drawbacks and improve performance, a proximity-aware semantic-based decentralized resource discovery framework is proposed for computational Grids. The framework is developed and implemented incrementally using a combination of semantic and proximity criteria. Extensive experimental analysis indicates that the Pastry-based resource discovery model outperforms the Chord-based one in terms of reducing communication overheads. To increase job success probability in decentralized resource discovery, the thesis extends current semantic techniques by presenting a sub-domain-based ontological structure. Towards this end, a semantic-based decentralized resource discovery model is developed and implemented.
For further enhancement of application performance and reduction of latency, the thesis proposes and designs the UPSARS (Unification of Proximity and Semantic for Appropriate Resource Selection) model for the sub-domain-based semantic decentralized resource discovery. The experimental results of UPSARS indicate that the model can reduce latency by allowing an application to allocate resources in proximity. Finally, to improve resource utilization, the thesis proposes a fuzzy-based framework for selecting the most suitable resources, considering not only semantic and proximity factors but also other parameters of the matched Grid resources, such as the number of machines and number of processors. Overall, the proximity-aware semantic-based decentralized resource discovery framework reduces communication overheads, enhances job success probability, reduces latency and improves resource utilization in a computational Grid environment.
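The thesis's UPSARS model is not reproduced here, but its general idea of unifying semantic matching with proximity can be sketched as a weighted scoring of candidate resources. The function name, weights, and resource attributes below are all hypothetical:

```python
def rank_resources(request, resources, w_semantic=0.7, w_proximity=0.3):
    # Score each Grid resource by combining the fraction of matched semantic
    # attributes with a proximity term (lower latency scores higher), then
    # rank descending. Weights are illustrative, not taken from the thesis.
    max_latency = max(r["latency_ms"] for r in resources)
    scored = []
    for r in resources:
        matched = sum(1 for k, v in request.items() if r["attrs"].get(k) == v)
        semantic = matched / len(request)
        proximity = 1.0 - r["latency_ms"] / max_latency
        scored.append((w_semantic * semantic + w_proximity * proximity,
                       r["name"]))
    return [name for score, name in sorted(scored, reverse=True)]

# Hypothetical job request and Grid resources, for illustration only.
request = {"os": "linux", "arch": "x86_64"}
resources = [
    {"name": "nodeA", "attrs": {"os": "linux", "arch": "x86_64"}, "latency_ms": 80},
    {"name": "nodeB", "attrs": {"os": "linux", "arch": "arm"}, "latency_ms": 10},
    {"name": "nodeC", "attrs": {"os": "windows", "arch": "x86_64"}, "latency_ms": 80},
]
print(rank_resources(request, resources))  # ['nodeA', 'nodeB', 'nodeC']
```

A fuzzy-based selection stage, as the thesis proposes, would extend the score with further resource parameters such as machine and processor counts rather than the two fixed weights used here.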
